Easy-First Dependency Parsing with Hierarchical Tree LSTMs
Authors
Eliyahu Kiperwasser, Yoav Goldberg
Abstract
We suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders. To demonstrate its effectiveness, we use the representation as the backbone of a greedy, bottom-up dependency parser, achieving state-of-the-art accuracies for English and Chinese, without relying on external word embeddings. The parser’s implementation is...
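To make the compositional idea concrete, here is a minimal PyTorch sketch of the general pattern the abstract describes: a node's vector is built by running sequence encoders over the already-computed vectors of its left and right dependents and combining them with the head's own vector. This is an illustrative assumption, not the authors' released model; the class name, dimensions, and tanh combiner are placeholders.

```python
import torch
import torch.nn as nn

class TreeNodeEncoder(nn.Module):
    """Hedged sketch of hierarchical tree composition with sequence LSTMs."""
    def __init__(self, dim):
        super().__init__()
        # Separate sequence encoders for the left and right dependents.
        self.left_lstm = nn.LSTM(dim, dim, batch_first=True)
        self.right_lstm = nn.LSTM(dim, dim, batch_first=True)
        # Combine the head vector with both dependent summaries into one node vector.
        self.combine = nn.Linear(3 * dim, dim)

    def forward(self, head_vec, left_children, right_children):
        # head_vec: (dim,); children: lists of already-computed (dim,) node vectors.
        def encode(children, lstm):
            if not children:
                return torch.zeros_like(head_vec)
            seq = torch.stack(children).unsqueeze(0)  # (1, n, dim)
            _, (h, _) = lstm(seq)
            return h.squeeze(0).squeeze(0)            # final hidden state, (dim,)
        left = encode(left_children, self.left_lstm)
        right = encode(right_children, self.right_lstm)
        return torch.tanh(self.combine(torch.cat([head_vec, left, right])))
```

In a greedy bottom-up parser of the kind the abstract describes, leaves would start as word embeddings and each attachment would recompute the head's vector from its current dependents.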
Similar Papers

Dependency Parsing with LSTMs: An Empirical Evaluation
We propose a transition-based dependency parser using Recurrent Neural Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward neural network parser of Chen and Manning (2014) and enables modelling of entire sequences of shift/reduce transition decisions. On the Google Web Treebank, our LSTM parser is competitive with the best feedforward parser on overall accuracy and n...
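The contrast with a feedforward scorer can be sketched briefly: an LSTM carries state across the whole transition sequence, so each shift/reduce decision is conditioned on the history of previous ones. The feature extraction and three-action inventory below are assumptions for illustration, not the paper's setup.

```python
import torch
import torch.nn as nn

class TransitionLSTM(nn.Module):
    """Hedged sketch: score transitions with an LSTM over the decision history."""
    def __init__(self, feat_dim, hidden_dim, num_transitions=3):  # shift, left-arc, right-arc
        super().__init__()
        self.cell = nn.LSTMCell(feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_transitions)

    def forward(self, config_features):
        # config_features: (num_steps, feat_dim), one vector per parser configuration.
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        logits = []
        for feats in config_features:
            h, c = self.cell(feats.unsqueeze(0), (h, c))
            logits.append(self.out(h))  # next-transition scores given the full history
        return torch.cat(logits)        # (num_steps, num_transitions)
```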
Easy-First Dependency Parsing of Modern Hebrew
We investigate the performance of an easy-first, non-directional dependency parser on the Hebrew Dependency Treebank. We show that with a basic feature set the greedy parser’s accuracy is on a par with that of a first-order globally optimized MST parser. The addition of a morphological-agreement feature improves parsing accuracy further, making it on par with a second-order globally optimized MST pars...
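An agreement feature of the kind the abstract mentions might look like the following sketch: when scoring a candidate head-dependent attachment, check whether the two tokens agree in gender and number, as Hebrew morphology requires. The attribute names and string feature format are illustrative assumptions.

```python
def agreement_features(head, dep):
    """head/dep: dicts of morphological attributes, e.g. {'gender': 'F', 'number': 'SG'}."""
    feats = []
    for attr in ('gender', 'number'):
        h, d = head.get(attr), dep.get(attr)
        if h is None or d is None:
            feats.append(f'{attr}=unknown')   # attribute missing on one token
        else:
            feats.append(f'{attr}_agree={h == d}')
    return feats

# agreement_features({'gender': 'F', 'number': 'SG'}, {'gender': 'F', 'number': 'PL'})
# -> ['gender_agree=True', 'number_agree=False']
```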
Easy-First POS Tagging and Dependency Parsing with Beam Search
In this paper, we combine easy-first dependency parsing and POS tagging algorithms with beam search and a structured perceptron. We propose a simple variant of “early-update” to ensure valid updates during training. The proposed solution can also be applied to combine beam search and a structured perceptron with other systems that exhibit spurious ambiguity. On CTB, we achieve 94.01% tagging...
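The classic early-update idea that this variant builds on can be sketched as follows: during beam-search training, stop decoding as soon as the gold partial hypothesis falls out of the beam and update the perceptron weights there, rather than at the end of the sequence. Hypotheses here are plain action tuples, and `expand` and `features` are hypothetical helpers, not the paper's code.

```python
def early_update_step(weights, gold_actions, expand, features, beam_size=8):
    """One structured-perceptron step with early update (hedged sketch).
    `expand(hyp)` yields one-action extensions of a hypothesis;
    `features(hyp)` returns a list of feature strings."""
    beam = [()]  # start from the empty action sequence
    for t in range(len(gold_actions)):
        gold = tuple(gold_actions[:t + 1])
        candidates = [h for b in beam for h in expand(b)]
        candidates.sort(key=lambda h: sum(weights.get(f, 0.0) for f in features(h)),
                        reverse=True)
        beam = candidates[:beam_size]
        if gold not in beam:
            # Early update: the gold prefix fell off the beam, so update against
            # the current best partial hypothesis and stop this example.
            for f in features(gold):
                weights[f] = weights.get(f, 0.0) + 1.0
            for f in features(beam[0]):
                weights[f] = weights.get(f, 0.0) - 1.0
            return
```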
Greedy Transition-Based Dependency Parsing with Stack LSTMs
We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation, which enables us to do this efficiently, is a new control structure for sequential neural networks: the stack long short-term memory unit (stack LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped...
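The push/pop behavior the abstract describes can be sketched as an LSTM that keeps a history of states, one per stack depth: push runs the cell on a new input, and pop restores the state the LSTM had before the last push. This is a minimal assumed sketch in PyTorch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    """Hedged sketch of a stack LSTM with push/pop over saved cell states."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        empty = (torch.zeros(1, hidden_dim), torch.zeros(1, hidden_dim))
        self.history = [empty]  # one (h, c) pair per stack depth, bottom first

    def push(self, x):
        # x: (input_dim,) embedding of the pushed element.
        h, c = self.cell(x.unsqueeze(0), self.history[-1])
        self.history.append((h, c))

    def pop(self):
        # Restore the state the LSTM had before the last push.
        if len(self.history) > 1:
            self.history.pop()

    def summary(self):
        # Hidden state summarizing the current stack contents.
        return self.history[-1][0].squeeze(0)
```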
Journal
Title: Transactions of the Association for Computational Linguistics
Year: 2016
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00110